A Hybrid Client-Based Incentive Mechanism for Federated Learning Participation Using Novel Generalization Bounds
Mohamed Salah Jebali
TU Delft
Federated Learning (FL) offers a privacy-preserving, distributed approach to training machine learning models on decentralized devices, eliminating the need to centralize sensitive data. This paradigm is particularly valuable in sectors such as healthcare, finance, and consumer electronics, where data sensitivity and low latency are crucial. However, FL faces significant challenges due to data heterogeneity, including variations in data distributions and sample sizes across participating clients. These discrepancies often degrade accuracy, prompting clients to drop out of training and triggering a harmful snowball effect in which each departure further reduces model quality for the remaining participants. This research addresses the issue of client incentives in FL by proposing a hybrid mechanism that allows individual clients to strategically decide whether to participate in the training process. By developing evaluation functions based on novel generalization bounds derived from Rademacher complexity, we provide clients with a framework for making informed decisions about their optimal training strategies. The study explores four distinct training strategies (local, global, mixture, and ensemble learning) along with their corresponding generalization bounds, which account for both the number of training points and the discrepancy between data distributions. Our approach targets selfish-honest clients, who are motivated by utility gains but still willing to cooperate, and offers a quantitative means for clients to assess potential accuracy improvements and mitigate the impact of data heterogeneity in FL.
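For orientation, the classical Rademacher-complexity bound below shows the general shape of such evaluation functions; this is the standard textbook form, not the paper's novel per-strategy bounds. For a loss bounded in $[0,1]$ and an i.i.d. sample $S$ of size $m$, with probability at least $1-\delta$, every hypothesis $h \in \mathcal{H}$ satisfies
\[
    L_{\mathcal{D}}(h) \;\le\; \hat{L}_{S}(h) \;+\; 2\,\mathfrak{R}_{m}(\mathcal{H}) \;+\; \sqrt{\frac{\ln(1/\delta)}{2m}},
\]
where $L_{\mathcal{D}}(h)$ is the true risk, $\hat{L}_{S}(h)$ the empirical risk, and $\mathfrak{R}_{m}(\mathcal{H})$ the Rademacher complexity of the hypothesis class. The bounds developed in this work refine this kind of template for each of the four training strategies, incorporating each client's sample size and the discrepancy between its local data distribution and those of the other clients.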